The quality of knowledge retrieval is crucial in knowledge-intensive conversations. Two common strategies for improving retrieval quality are finetuning the knowledge retriever and generating a self-contained query, but both impose heavy burdens of expensive computation and elaborate annotation. In this paper, we propose an unsupervised query enhancement approach for knowledge-intensive conversations, namely QKConv. QKConv consists of three modules: a query generator, an off-the-shelf knowledge selector, and a response generator. Without extra supervision, the end-to-end joint training of QKConv explores multiple candidate queries and uses the correspondingly selected knowledge to produce the target response. To evaluate the effectiveness of the proposed method, we conduct comprehensive experiments on conversational question answering, task-oriented dialogue, and knowledge-grounded conversation. Experimental results demonstrate that QKConv achieves state-of-the-art performance among unsupervised methods and competitive performance relative to supervised methods.
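As a rough illustration of how the three modules could fit together at inference time, the sketch below pairs a toy lexical-overlap scorer (standing in for an off-the-shelf knowledge selector) with pluggable query and response generators; all function names and the pair-selection heuristic are assumptions, not QKConv's actual implementation.

```python
# Hypothetical sketch of a query -> knowledge -> response pipeline (names and scorer are illustrative).
from typing import Callable, List, Tuple


def lexical_overlap(query: str, passage: str) -> float:
    """Toy stand-in for an off-the-shelf knowledge selector (e.g., BM25 or dense retrieval)."""
    q_tokens, p_tokens = set(query.lower().split()), set(passage.lower().split())
    return len(q_tokens & p_tokens) / max(len(q_tokens), 1)


def respond_with_query_enhancement(
    context: str,
    knowledge_pool: List[str],
    generate_queries: Callable[[str, int], List[str]],   # query generator (any seq2seq model)
    generate_response: Callable[[str, str], str],        # response generator (context + knowledge)
    num_queries: int = 4,
) -> Tuple[str, str, str]:
    """Explore several candidate queries, select knowledge for each, and respond with the best pair."""
    candidates = generate_queries(context, num_queries)
    query, knowledge = max(
        ((q, max(knowledge_pool, key=lambda k: lexical_overlap(q, k))) for q in candidates),
        key=lambda pair: lexical_overlap(pair[0], pair[1]),
    )
    return query, knowledge, generate_response(context, knowledge)
```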
Software engineers working with the same programming language (PL) may speak different natural languages (NLs) and vice versa, which creates significant barriers to communication and working efficiency. Recent studies have demonstrated the effectiveness of generative pre-training on computer programs, yet such models remain English-centric. In this work, we take a step toward bridging the gap between multilingual NLs and multilingual PLs for large language models (LLMs). We release ERNIE-Code, a unified pre-trained language model for 116 NLs and 6 PLs. We employ two methods for universal cross-lingual pre-training: span-corruption language modeling, which learns patterns from monolingual NL or PL; and pivot-based translation language modeling, which relies on parallel data between many NLs and PLs. Extensive results show that ERNIE-Code outperforms previous multilingual LLMs for PL or NL across a wide range of code-intelligence end tasks, including multilingual code-to-text, text-to-code, code-to-code, and text-to-text generation. We further show its advantage in zero-shot prompting for multilingual code summarization and text-to-text translation. We will make our code and pre-trained models publicly available.
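The span-corruption objective can be illustrated with a small T5-style corruption routine that works identically on tokenized NL or PL input; the span-length distribution and sentinel format below are assumptions rather than ERNIE-Code's exact recipe.

```python
# Illustrative T5-style span corruption over a tokenized NL or PL sequence.
import random
from typing import List, Tuple


def span_corrupt(tokens: List[str], mask_ratio: float = 0.15, mean_span: int = 3,
                 seed: int = 0) -> Tuple[List[str], List[str]]:
    rng = random.Random(seed)
    budget = max(1, int(len(tokens) * mask_ratio))        # total tokens allowed to be masked
    source, target, i, sentinel = [], [], 0, 0
    while i < len(tokens):
        if budget > 0 and rng.random() < mask_ratio:
            span = min(max(1, int(rng.expovariate(1 / mean_span))), budget, len(tokens) - i)
            source.append(f"<extra_id_{sentinel}>")        # corrupted input keeps a sentinel
            target.append(f"<extra_id_{sentinel}>")        # target lists the masked spans
            target.extend(tokens[i:i + span])
            sentinel += 1
            budget -= span
            i += span
        else:
            source.append(tokens[i])
            i += 1
    return source, target


src, tgt = span_corrupt("def add ( a , b ) : return a + b".split())
print(src)   # corrupted input with sentinel tokens
print(tgt)   # reconstruction target: sentinels followed by the original spans
```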
Despite the tremendous progress of Masked Autoencoders (MAE) in vision tasks such as image and video understanding, exploring MAE for large-scale 3D point clouds remains challenging due to their inherent irregularity. In contrast to previous 3D MAE frameworks, which either design a complex decoder to infer masked information from maintained regions or adopt sophisticated masking strategies, we propose a much simpler paradigm. The core idea is to apply a \textbf{G}enerative \textbf{D}ecoder for MAE (GD-MAE) that automatically merges the surrounding context to restore the masked geometric knowledge in a hierarchical fusion manner. In doing so, our approach avoids heuristic decoder designs and enjoys the flexibility of exploring various masking strategies. The corresponding part incurs less than \textbf{12\%} of the latency of conventional methods while achieving better performance. We demonstrate the efficacy of the proposed method on several large-scale benchmarks: Waymo, KITTI, and ONCE. Consistent improvement on downstream detection tasks illustrates strong robustness and generalization capability. Not only does our method achieve state-of-the-art results; remarkably, it reaches comparable accuracy even with \textbf{20\%} of the labeled data on the Waymo dataset. The code will be released at \url{https://github.com/Nightmare-n/GD-MAE}.
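For intuition, the toy sketch below masks a large fraction of point-cloud patch tokens and restores each masked token from its nearest visible neighbors in 3D space; this nearest-neighbor average merely stands in for GD-MAE's learned hierarchical generative decoder, and all shapes and ratios are assumptions.

```python
# Toy numpy sketch of MAE-style masking over point-cloud patch tokens, with a
# nearest-neighbor context average standing in for a learned generative decoder.
import numpy as np

rng = np.random.default_rng(0)
num_patches, dim, mask_ratio = 64, 32, 0.75

tokens = rng.normal(size=(num_patches, dim))             # encoded patch/pillar features
centers = rng.uniform(size=(num_patches, 3))             # 3D centers of each patch

mask = rng.random(num_patches) < mask_ratio              # True = masked
visible_idx = np.where(~mask)[0]

# "Merge the surrounding context": rebuild each masked token from its
# k nearest visible neighbors in 3D space.
k = 4
recon = tokens.copy()
for i in np.where(mask)[0]:
    d = np.linalg.norm(centers[visible_idx] - centers[i], axis=1)
    nearest = visible_idx[np.argsort(d)[:k]]
    recon[i] = tokens[nearest].mean(axis=0)

print("masked patches:", int(mask.sum()),
      "reconstruction error:", float(np.mean((recon[mask] - tokens[mask]) ** 2)))
```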
While large pre-trained models have transformed the field of natural language processing (NLP), their high training cost and low cross-lingual availability prevent the new advances from being shared equally by users across all languages, especially the less widely spoken ones. To promote equal opportunities for all language speakers in NLP research and to reduce energy consumption for sustainability, this study proposes GreenPLM, an effective and energy-efficient framework that uses bilingual lexicons to directly translate language models of one language into other languages at (almost) no additional cost. We validate this approach on 18 languages and show that the framework is comparable to, if not better than, other heuristic methods trained at high cost. In addition, given a small additional computational budget (2.5\%), the framework outperforms the original monolingual language models in six out of seven tested languages. We release language models in 50 languages translated from English, along with the source code.
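A minimal sketch of the lexicon-based translation idea, assuming a tiny vocabulary and a word-to-word lexicon: target-language embeddings are built by copying (or averaging) the embeddings of their English translations, so the rest of the model can be reused unchanged. The lexicon, vocabulary, and averaging rule here are illustrative, not GreenPLM's exact procedure.

```python
# Minimal sketch of translating a monolingual LM into another language via a bilingual lexicon.
import numpy as np

english_vocab = {"cat": 0, "dog": 1, "house": 2}
english_emb = np.random.default_rng(0).normal(size=(len(english_vocab), 8))

# bilingual lexicon: target-language word -> one or more English translations
lexicon = {"Katze": ["cat"], "Hund": ["dog"], "Haus": ["house"]}

target_vocab = {w: i for i, w in enumerate(lexicon)}
target_emb = np.zeros((len(target_vocab), english_emb.shape[1]))
for word, translations in lexicon.items():
    rows = [english_emb[english_vocab[t]] for t in translations if t in english_vocab]
    target_emb[target_vocab[word]] = np.mean(rows, axis=0)   # average if several translations exist

print(target_emb.shape)   # translated embedding table, reusable with the original encoder weights
```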
Recent cross-lingual cross-modal works attempt to extend Vision-Language Pre-training (VLP) models to non-English inputs and achieve impressive performance. However, these models focus only on understanding tasks and use encoder-only architectures. In this paper, we propose ERNIE-UniX2, a unified cross-lingual cross-modal pre-training framework for both generation and understanding tasks. ERNIE-UniX2 integrates multiple pre-training paradigms (e.g., contrastive learning and language modeling) on an encoder-decoder architecture and learns a better joint representation across languages and modalities. Furthermore, ERNIE-UniX2 can be seamlessly fine-tuned for a variety of downstream generation and understanding tasks. Pre-trained on both multilingual text-only and image-text datasets, ERNIE-UniX2 achieves SOTA results on various cross-lingual cross-modal generation and understanding tasks such as multimodal machine translation and multilingual visual question answering.
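A toy sketch of combining a cross-modal contrastive objective with a language-modeling objective in a single pre-training step is shown below; the loss weighting, shapes, and exact objectives are assumptions, not ERNIE-UniX2's actual configuration.

```python
# Toy sketch of mixing a contrastive (InfoNCE) loss with a language-modeling loss.
import numpy as np


def info_nce(img_emb: np.ndarray, txt_emb: np.ndarray, temperature: float = 0.07) -> float:
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature                     # (batch, batch) similarity matrix
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))             # matched pairs lie on the diagonal


def lm_loss(token_logits: np.ndarray, targets: np.ndarray) -> float:
    log_probs = token_logits - np.log(np.exp(token_logits).sum(axis=-1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(targets)), targets]))


rng = np.random.default_rng(0)
loss = 0.5 * info_nce(rng.normal(size=(4, 16)), rng.normal(size=(4, 16))) \
     + 0.5 * lm_loss(rng.normal(size=(10, 100)), rng.integers(0, 100, size=10))
print(loss)
```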
Speech representation learning has improved both speech understanding and speech synthesis for a single language. However, its ability in cross-lingual scenarios has not been explored. In this paper, we extend the pretraining method to cross-lingual multi-speaker speech synthesis tasks, including cross-lingual multi-speaker voice cloning and cross-lingual multi-speaker speech editing. We propose a speech-text joint pretraining framework in which we randomly mask the spectrogram and the phonemes given a speech example and its transcription. By learning to reconstruct the masked parts of the input in different languages, our model shows great improvements over speaker-embedding-based multi-speaker TTS methods. Moreover, our framework is end-to-end for both training and inference, without any finetuning effort. Experiments on cross-lingual multi-speaker voice cloning and cross-lingual multi-speaker speech editing confirm that our model outperforms speaker-embedding-based multi-speaker TTS methods. The code and model are publicly available at PaddleSpeech.
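The masking step can be pictured with a small example that zeroes out contiguous spectrogram spans and individual phonemes from a paired (speech, transcription) input; the mask ratios and span lengths are assumptions made for illustration only.

```python
# Illustrative masking of a paired (spectrogram, phoneme) example for joint speech-text pretraining.
import numpy as np

rng = np.random.default_rng(0)
spectrogram = rng.normal(size=(200, 80))                  # (frames, mel bins)
phonemes = "DH IH S IH Z AH T EH S T".split()

# mask contiguous spectrogram spans
spec_mask = np.zeros(len(spectrogram), dtype=bool)
for start in rng.choice(len(spectrogram) - 10, size=5, replace=False):
    spec_mask[start:start + 10] = True
masked_spec = spectrogram.copy()
masked_spec[spec_mask] = 0.0                              # zero out masked frames

# mask individual phonemes
phn_mask = rng.random(len(phonemes)) < 0.3
masked_phonemes = ["<mask>" if m else p for p, m in zip(phonemes, phn_mask)]

# the model is trained to reconstruct spectrogram[spec_mask] and the masked phonemes
print(masked_phonemes, int(spec_mask.sum()), "masked frames")
```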
Video-and-language pre-training has shown promising results for learning generalizable representations. Most existing approaches model video and text implicitly, without considering explicit structural representations of the multi-modal content. We denote such representations as structural knowledge, which expresses rich semantics at multiple granularities. Related works have proposed object-aware approaches that inject similar knowledge as inputs. However, existing methods usually fail to effectively utilize such knowledge as regularizations to shape a superior cross-modal representation space. To this end, we propose a Cross-modaL knOwledge-enhanced Pre-training (CLOP) method with Knowledge Regularizations. Our method has two key designs: 1) a simple yet effective Structural Knowledge Prediction (SKP) task to pull together the latent representations of similar videos; and 2) a novel Knowledge-guided sampling approach for Contrastive Learning (KCL) to push apart cross-modal hard negative samples. We evaluate our method on four text-video retrieval tasks and one multi-choice QA task. The experiments show clear improvements, outperforming prior works by a substantial margin. Besides, we provide ablations and insights into how our methods shape the latent representation space, demonstrating the value of incorporating knowledge regularizations into video-and-language pre-training.
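The knowledge-guided sampling idea in KCL can be sketched as follows, assuming a precomputed structural-knowledge similarity matrix over a batch: negatives are drawn among the samples most similar to the anchor in knowledge space, i.e., the hardest ones. The similarity source and selection rule here are illustrative, not the paper's exact formulation.

```python
# Toy sketch of knowledge-guided hard-negative sampling for contrastive learning.
import numpy as np

rng = np.random.default_rng(0)
batch = 8
# e.g., overlap of extracted entities/relations between samples (assumed precomputed)
knowledge_sim = rng.random((batch, batch))


def sample_hard_negatives(anchor: int, num_neg: int = 3) -> np.ndarray:
    candidates = np.array([j for j in range(batch) if j != anchor])
    # prefer candidates most similar to the anchor in knowledge space (hardest negatives)
    order = candidates[np.argsort(-knowledge_sim[anchor, candidates])]
    return order[:num_neg]


for i in range(batch):
    print(f"anchor {i} -> hard negatives {sample_hard_negatives(i).tolist()}")
```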
The COVID-19 pandemic continues to generate topics of discussion and debate on social media. To explore the pandemic's impact on people's lives, it is crucial to understand the public's concerns and attitudes toward pandemic-related entities (e.g., drugs, vaccines) on social media. However, models trained on existing named entity recognition (NER) or targeted sentiment analysis (TSA) datasets have limited ability to understand COVID-related social media texts, because these datasets were not designed or annotated from a medical perspective. This paper releases METS-CoV, a dataset containing medical entities and targeted sentiments in COVID-related tweets. METS-CoV contains 10,000 tweets annotated with 7 entity types, including 4 medical entity types (Disease, Drug, Symptom, and Vaccine) and 3 general entity types (Person, Location, and Organization). To further investigate tweet users' attitudes toward specific entities, 4 entity types (Person, Organization, Drug, and Vaccine) were selected and annotated with user sentiments, yielding a targeted sentiment dataset with 9,101 entities (in 5,278 tweets). To the best of our knowledge, METS-CoV is the first dataset to collect medical entities and the corresponding sentiments from COVID-related tweets. We benchmark classical machine learning models and state-of-the-art deep learning models through extensive experiments. The results show that this dataset leaves considerable room for improvement on both NER and TSA tasks. METS-CoV is an important resource for developing better medical social media tools and facilitating computational social science research, especially in epidemiology. Our data, annotation guidelines, benchmark models, and source code are publicly available (https://github.com/ylab-open/mets-cov) to ensure reproducibility.
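A hypothetical record layout for one annotated tweet, matching the entity and sentiment types described above, might look like the following; the field names are illustrative and not the dataset's actual schema.

```python
# Hypothetical annotation schema for a tweet with medical entities and targeted sentiment.
from dataclasses import dataclass
from typing import List, Optional

ENTITY_TYPES = {"Disease", "Drug", "Symptom", "Vaccine", "Person", "Location", "Organization"}


@dataclass
class EntityAnnotation:
    span: str
    start: int                       # character offsets into the tweet text
    end: int
    entity_type: str
    sentiment: Optional[str] = None  # only for Person, Organization, Drug, Vaccine


@dataclass
class AnnotatedTweet:
    text: str
    entities: List[EntityAnnotation]


example = AnnotatedTweet(
    text="The new vaccine helped with my fever.",
    entities=[
        EntityAnnotation("vaccine", 8, 15, "Vaccine", sentiment="positive"),
        EntityAnnotation("fever", 31, 36, "Symptom"),
    ],
)
print(example)
```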
With the blossoming of deep learning techniques, fully supervised skeleton-based action recognition has made great progress. However, these methods require sufficient labeled data, which is not easy to obtain. In contrast, self-supervised skeleton-based action recognition has attracted increasing attention. By utilizing unlabeled data, more generalizable features can be learned, which alleviates overfitting and reduces the demand for large-scale labeled training data. Inspired by MAE, we propose a spatial-temporal masked autoencoder framework for self-supervised 3D skeleton-based action recognition (SkeletonMAE). Following MAE's masking-and-reconstruction pipeline, we utilize a skeleton-based encoder transformer architecture to reconstruct the masked skeleton sequences. A novel masking strategy, named spatial-temporal masking, is introduced at both the joint level and the frame level of the skeleton sequence. This pre-training strategy enables the encoder to output generalizable skeleton features with spatial and temporal dependencies. Given an unmasked skeleton sequence, the encoder is then used for the action recognition task. Extensive experiments show that SkeletonMAE achieves remarkable performance and outperforms state-of-the-art methods on the NTU RGB+D and NTU RGB+D 120 datasets.
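The spatial-temporal masking strategy can be illustrated on a (frames, joints, 3) skeleton array, where some whole frames are masked temporally and some joints are masked spatially across all frames; the ratios and combination rule below are assumptions for illustration.

```python
# Toy numpy sketch of spatial-temporal masking on a skeleton sequence.
import numpy as np

rng = np.random.default_rng(0)
frames, joints = 64, 25                                   # e.g., NTU RGB+D skeletons have 25 joints
skeleton = rng.normal(size=(frames, joints, 3))           # (T, V, xyz)

frame_mask = rng.random(frames) < 0.4                     # temporal: mask whole frames
joint_mask = rng.random(joints) < 0.4                     # spatial: mask joints across all frames

mask = frame_mask[:, None] | joint_mask[None, :]          # (T, V) combined spatial-temporal mask
masked_skeleton = skeleton.copy()
masked_skeleton[mask] = 0.0                               # the encoder sees only unmasked joints

print("masked ratio:", float(mask.mean()))                # reconstruction targets = skeleton[mask]
```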
Many open-domain dialogue models pre-trained on social media comments can produce coherent replies but generate unengaging responses when interacting with real users. This phenomenon may mainly result from the deficiency of annotated human-human conversations and the misalignment with human preferences. In this paper, we propose a novel and efficient approach, Diamante, to enhance open-domain chatbots, in which two kinds of human feedback (including explicit demonstration and implicit preference) are collected and leveraged. By asking annotators to select or amend model-generated candidate responses, Diamante efficiently collects human-demonstrated responses and constructs a Chinese chit-chat dataset. To enhance alignment with human preferences, Diamante leverages the implicit preference in the data collection process and introduces generation-evaluation joint training. Comprehensive experiments indicate that the Diamante dataset and joint training paradigm can significantly boost the performance of Chinese pre-trained dialogue models.
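A toy sketch of what a generation-evaluation joint objective could look like: a generation loss on the human-demonstrated response plus a hinge-style preference loss that ranks the demonstrated response above model-generated candidates. The loss form, margin, and equal weighting are assumptions, not Diamante's actual training recipe.

```python
# Toy sketch of a joint objective: generation NLL + preference ranking loss.
import numpy as np


def nll(token_logits: np.ndarray, target_ids: np.ndarray) -> float:
    log_probs = token_logits - np.log(np.exp(token_logits).sum(axis=-1, keepdims=True))
    return float(-np.mean(log_probs[np.arange(len(target_ids)), target_ids]))


def preference_loss(score_demonstrated: float, scores_candidates: np.ndarray, margin: float = 1.0) -> float:
    # hinge ranking: the demonstrated response should score higher than each generated candidate
    return float(np.mean(np.maximum(0.0, margin - (score_demonstrated - scores_candidates))))


rng = np.random.default_rng(0)
gen_loss = nll(rng.normal(size=(12, 5000)), rng.integers(0, 5000, size=12))
eval_loss = preference_loss(score_demonstrated=2.1, scores_candidates=rng.normal(size=3))
joint_loss = gen_loss + eval_loss
print(joint_loss)
```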